Structured tabular data exist across nearly all fields. Reasoning over these data aims to answer questions or determine the truthfulness of hypothesis sentences by understanding the semantic meaning of a table. While previous works have devoted significant effort to tabular reasoning, they generally assume sufficient labeled data. However, constructing reasoning samples over tables (and related text) is labor-intensive, especially when the reasoning process is complex. When labeled data is insufficient, model performance declines sharply. In this paper, we propose a unified framework for unsupervised complex tabular reasoning (UCTR), which generates sufficient and diverse synthetic data with complex logic for tabular reasoning tasks, assuming no human-annotated data at all. We first use a random sampling strategy to collect diverse programs of different types and execute them on tables with a "Program-Executor" module. To bridge the gap between programs and natural language sentences, we design a powerful "NL-Generator" module that generates natural language sentences with complex logic from these programs. Since a table often occurs with its surrounding text, we further propose novel "Table-to-Text" and "Text-to-Table" operators to handle joint table-text reasoning scenarios. In this way, we can fully exploit unlabeled table resources to obtain a well-performing reasoning model under an unsupervised setting. Our experiments cover different tasks (question answering and fact verification) and different domains (general and specific), showing that our unsupervised method achieves up to 93% of the performance of supervised models. We also find that, used as a data augmentation technique, it can substantially boost supervised performance in low-resource domains. Our code is available at https://github.com/leezythu/UCTR.
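As a concrete illustration of the program-sampling idea, the toy sketch below (assumed table contents and program templates, not the released UCTR code) samples a templated program, executes it on a tiny table to obtain a label, and verbalizes it the way an NL-Generator-style model would then paraphrase:

```python
# A minimal sketch of the program-sampling idea behind UCTR (not the authors'
# code): sample templated programs over a table, execute them to obtain labels,
# then verbalize the program so an NL-Generator-style model can paraphrase it.
import random

TABLE = {  # toy table: column name -> values
    "player": ["Ann", "Bob", "Cho"],
    "goals":  [12, 7, 9],
}

def sample_program(table):
    """Randomly pick a program type and its arguments."""
    col = "goals"
    kind = random.choice(["count_greater", "argmax"])
    if kind == "count_greater":
        threshold = random.choice(table[col])
        return ("count_greater", col, threshold)
    return ("argmax", col, None)

def execute(program, table):
    """A tiny 'Program-Executor': run the program on the table."""
    kind, col, arg = program
    if kind == "count_greater":
        return sum(v > arg for v in table[col])
    idx = max(range(len(table[col])), key=lambda i: table[col][i])
    return table["player"][idx]

def verbalize(program, answer):
    """Template verbalization; a learned NL-Generator would paraphrase this."""
    kind, col, arg = program
    if kind == "count_greater":
        return f"How many players scored more than {arg} {col}?", str(answer)
    return f"Which player has the most {col}?", str(answer)

prog = sample_program(TABLE)
print(verbalize(prog, execute(prog, TABLE)))
```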
In this work, we propose an ID-preserving talking head generation framework, which advances previous methods in two aspects. First, as opposed to interpolating from sparse flow, we claim that dense landmarks are crucial to achieving accurate geometry-aware flow fields. Second, inspired by face-swapping methods, we adaptively fuse the source identity during synthesis, so that the network better preserves the key characteristics of the image portrait. Although the proposed model surpasses prior methods in generation fidelity on established benchmarks, personalized fine-tuning is usually still needed to make talking head generation qualified for real-world usage. However, this process is computationally demanding and unaffordable for standard users. To solve this, we propose a fast adaptation model using a meta-learning approach. The learned model can be adapted into a high-quality personalized model in as little as 30 seconds. Last but not least, a spatial-temporal enhancement module is proposed to improve fine details while ensuring temporal coherency. Extensive experiments demonstrate the significant superiority of our approach over the state of the art in both one-shot and personalized settings.
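The fast-adaptation step could, for example, follow a Reptile-style meta-learning loop; the sketch below is an assumption about that loop (the model, stand-in loss, and step counts are placeholders), not the paper's implementation:

```python
# A minimal Reptile-style sketch of meta-learning for fast personalization
# (an assumption about the adaptation loop, not the paper's implementation).
import copy
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

def inner_adapt(model, frames, steps=5, lr=1e-3):
    """Fine-tune a cloned model on one person's frames for a few steps."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((adapted(frames) - frames) ** 2).mean()  # stand-in recon loss
        opt.zero_grad(); loss.backward(); opt.step()
    return adapted

def meta_step(model, tasks, meta_lr=0.1):
    """Move meta-weights toward each task's adapted weights (Reptile update)."""
    for frames in tasks:
        adapted = inner_adapt(model, frames)
        with torch.no_grad():
            for p, q in zip(model.parameters(), adapted.parameters()):
                p += meta_lr * (q - p)

tasks = [torch.randn(8, 64) for _ in range(4)]  # one tensor per identity
meta_step(generator, tasks)
```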
In this work, we propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace. Unlike most previous methods, which focus on transferring the source's inner facial features but neglect facial contours, our FlowFace transfers both to a target face, leading to more realistic face swapping. Concretely, FlowFace consists of a face reshaping network and a face swapping network. The face reshaping network addresses the shape outline differences between the source and target faces: it first estimates a semantic flow (i.e., face shape differences) between the source and the target face, and then explicitly warps the target face shape with the estimated semantic flow. After reshaping, the face swapping network generates inner facial features that exhibit the identity of the source face. We employ a pre-trained face masked autoencoder (MAE) to extract facial features from both the source face and the target face. In contrast to previous methods that use identity embeddings to preserve identity information, the features extracted by our encoder better capture both facial appearance and identity information. We then develop a cross-attention fusion module to adaptively fuse the inner facial features of the source face with the target's facial attributes, leading to better identity preservation. Extensive quantitative and qualitative experiments on in-the-wild faces demonstrate that FlowFace significantly outperforms the state of the art.
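The two ingredients named above, warping with an estimated semantic flow and cross-attention fusion of source and target features, can be sketched roughly as follows; the shapes, the zero flow, and the residual fusion are illustrative assumptions, not the released FlowFace code:

```python
# A minimal sketch (with assumed shapes) of flow-based warping and
# cross-attention fusion; not the released FlowFace implementation.
import torch
import torch.nn.functional as F
from torch import nn

def warp_with_flow(image, flow):
    """image: (B,3,H,W); flow: (B,2,H,W) pixel offsets. Warp via grid_sample."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float().unsqueeze(0).expand(B, -1, -1, -1)
    grid = grid + flow.permute(0, 2, 3, 1)               # displaced sampling grid
    grid[..., 0] = 2 * grid[..., 0] / (W - 1) - 1        # normalize x to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (H - 1) - 1        # normalize y to [-1, 1]
    return F.grid_sample(image, grid, align_corners=True)

class CrossAttentionFusion(nn.Module):
    """Target tokens attend to source-face tokens (the identity carrier)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target_tokens, source_tokens):
        fused, _ = self.attn(target_tokens, source_tokens, source_tokens)
        return target_tokens + fused                      # residual fusion

img, flow = torch.randn(1, 3, 64, 64), torch.zeros(1, 2, 64, 64)
warped = warp_with_flow(img, flow)
fused = CrossAttentionFusion()(torch.randn(1, 49, 256), torch.randn(1, 49, 256))
```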
Open Information Extraction (OpenIE) facilitates the open-domain discovery of textual facts. However, the prevailing solutions evaluate OpenIE models on in-domain test sets held out from the training corpus, which violates the task's founding principle of domain independence. In this paper, we propose to advance OpenIE towards a more realistic scenario: generalizing over unseen target domains whose data distributions differ from the source training domains, termed Generalized OpenIE. For this purpose, we first introduce GLOBE, a large-scale human-annotated multi-domain OpenIE benchmark, to examine the robustness of recent OpenIE models to domain shifts; the relative performance degradation of up to 70% highlights the challenge of generalized OpenIE. We then propose DragonIE, which explores a minimalist graph expression of textual facts, the directed acyclic graph, to improve OpenIE generalization. Extensive experiments demonstrate that DragonIE beats previous methods in both in-domain and out-of-domain settings by as much as 6.0% absolute F1 score, but there is still ample room for improvement.
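The directed-acyclic-graph expression of a fact can be pictured with a small hypothetical structure like the one below; the span and relation naming are assumptions made purely for illustration:

```python
# An illustrative (hypothetical) sketch of encoding a textual fact as a small
# directed acyclic graph over token spans; naming here is an assumption.
from collections import defaultdict

class FactDAG:
    def __init__(self):
        self.edges = defaultdict(list)   # span -> list of (relation, span)

    def add(self, head, relation, tail):
        self.edges[head].append((relation, tail))

    def is_acyclic(self):
        seen, stack = set(), set()
        def visit(node):
            if node in stack:
                return False             # back edge => cycle
            if node in seen:
                return True
            seen.add(node); stack.add(node)
            ok = all(visit(t) for _, t in self.edges[node])
            stack.discard(node)
            return ok
        return all(visit(n) for n in list(self.edges))

dag = FactDAG()
dag.add("Marie Curie", "won", "the Nobel Prize")
dag.add("the Nobel Prize", "in", "1903")
assert dag.is_acyclic()
```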
Although substantial efforts have been made using graph neural networks (GNNs) for AI-driven drug discovery (AIDD), effective molecular representation learning remains an open challenge, especially when labeled molecules are scarce. Recent studies suggest that large GNN models pre-trained by self-supervised learning on unlabeled datasets enable better transfer performance in downstream molecular property prediction tasks. However, they often require large-scale datasets and considerable computational resources, which is time-consuming, computationally expensive, and environmentally unfriendly. To alleviate these limitations, we propose a novel pre-training model for molecular representation learning, the Bi-branch Masked Graph Transformer Autoencoder (BatmanNet). BatmanNet features two tailored and complementary graph autoencoders that reconstruct the missing nodes and edges from a masked molecular graph. Surprisingly, we find that a high masking ratio (60%) of atoms and bonds achieves the best performance. We further propose an asymmetric graph-based encoder-decoder architecture for both nodes and edges, where a transformer-based encoder takes only the visible subset of nodes or edges, and a lightweight decoder reconstructs the original molecule from the latent representation and mask tokens. With this simple yet effective asymmetric design, BatmanNet learns efficiently even from a much smaller unlabeled molecular dataset, capturing the underlying structural and semantic information and overcoming a major limitation of current deep neural networks for molecular representation learning. For instance, using only 250K unlabeled molecules as pre-training data, our BatmanNet with 2.575M parameters achieves a 0.5% improvement in average AUC over the current state-of-the-art method with 100M parameters pre-trained on 11M molecules.
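A compact sketch of the asymmetric masked-autoencoder idea for node features follows; the shapes, layer counts, and plain MSE objective are simplifying assumptions rather than the BatmanNet release, and the edge branch is omitted:

```python
# Encode only the visible 40% of nodes with a larger encoder, then decode all
# positions with mask tokens using a lightweight decoder (simplified sketch).
import torch
from torch import nn

class MaskedNodeAutoencoder(nn.Module):
    def __init__(self, dim=64, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)   # larger
        dec_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)   # light
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, nodes):                      # nodes: (B, N, dim)
        B, N, D = nodes.shape
        keep = int(N * (1 - self.mask_ratio))
        perm = torch.rand(B, N).argsort(dim=1)
        visible_idx = perm[:, :keep]                            # (B, keep)
        visible = torch.gather(nodes, 1, visible_idx.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)                          # encode visible only
        full = self.mask_token.expand(B, N, D).clone()
        full.scatter_(1, visible_idx.unsqueeze(-1).expand(-1, -1, D), latent)
        recon = self.decoder(full)                              # reconstruct all nodes
        return ((recon - nodes) ** 2).mean()                    # reconstruction loss

loss = MaskedNodeAutoencoder()(torch.randn(2, 30, 64))
loss.backward()
```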
Reinforcement learning (RL) operating on attack graphs and leveraging cyber terrain principles is used to develop the reward and state representations for determining surveillance detection routes (SDRs). This work extends previous efforts on developing RL methods for path analysis within enterprise networks. It focuses on building SDRs whose routes explore network services while trying to evade risk. RL supports the construction of these routes through a reward mechanism that guides their realization. The RL algorithm is modified with a novel warm-up phase that, during initial exploration, decides which areas of the network are safe to explore based on the rewards and a penalty scale factor.
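One way to read the warm-up phase is sketched below with a toy attack graph and tabular Q-learning; the graph, rewards, risks, and thresholding rule are illustrative assumptions, not the paper's environment:

```python
# Toy Q-learning sketch of a warm-up phase: nodes whose penalized reward falls
# below a threshold are flagged unsafe and avoided during later exploration.
import random

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # toy attack graph
SERVICE_REWARD = {"A": 0.0, "B": 1.0, "C": 1.0, "D": 2.0}
RISK = {"A": 0.0, "B": 0.1, "C": 0.9, "D": 0.2}              # detection risk

def reward(node, penalty_scale):
    return SERVICE_REWARD[node] - penalty_scale * RISK[node]

def warm_up(penalty_scale=5.0, threshold=0.0):
    """Mark nodes that are safe to explore based on penalized reward."""
    return {n for n in GRAPH if reward(n, penalty_scale) >= threshold}

def q_learning(safe, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    Q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s] if a in safe}
    for _ in range(episodes):
        s = "A"
        while GRAPH[s]:
            actions = [a for a in GRAPH[s] if (s, a) in Q]
            if not actions:
                break
            a = random.choice(actions) if random.random() < eps else \
                max(actions, key=lambda x: Q[(s, x)])
            future = max((Q[(a, b)] for b in GRAPH[a] if (a, b) in Q), default=0.0)
            Q[(s, a)] += alpha * (reward(a, 1.0) + gamma * future - Q[(s, a)])
            s = a
    return Q

print(q_learning(warm_up()))
```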
Traditional urban planning requires urban experts to spend a great deal of time and effort producing an optimal urban plan under many architectural constraints. The remarkable imaginative power of deep generative learning offers hope for renovating urban planning. Although automated urban planners have been explored, they remain limited because they 1) ignore human requirements in urban planning, 2) omit the spatial hierarchy in urban planning, and 3) lack large numbers of urban plan data samples. To overcome these limitations, we propose a novel deep human-instructed urban planner. In our preliminary work, we formulated it as an encoder-decoder paradigm: the encoder learns the information distribution of the surrounding context, human instructions, and land-use configurations, while the decoder reconstructs the land-use configuration and the associated urban functional zones. The reconstruction process captures the spatial hierarchy between functional zones and spatial grids. Meanwhile, we introduce a variational Gaussian mechanism to mitigate the data sparsity problem. Although this earlier work yields promising results, the generative performance is still unstable, because the way spatial hierarchies are captured may lead to unclear optimization directions. In this journal version, we propose a cascaded deep generative framework based on generative adversarial networks (GANs) to address this issue, inspired by the workflow of urban experts. In particular, the first GAN constructs urban functional zones based on human instructions and the information of the surrounding context. The second GAN then generates the land-use configuration conditioned on the constructed functional zones. In addition, we provide a conditioning augmentation module to augment the data samples. Finally, we conduct extensive experiments to validate the efficacy of our work.
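The cascaded design can be pictured with two schematic generators, the first mapping instructions and context to functional zones and the second mapping zones to a land-use configuration; the dimensions and conditioning are assumptions, and the discriminators and adversarial losses are omitted:

```python
# A schematic two-stage generator mirroring the cascaded design described
# above (shapes and conditioning are assumptions for illustration only).
import torch
from torch import nn

class ZoneGenerator(nn.Module):
    """Stage 1: human instruction + surrounding context -> functional zones."""
    def __init__(self, cond_dim=32, grid=16, n_zones=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, grid * grid * n_zones))
        self.grid, self.n_zones = grid, n_zones

    def forward(self, cond):
        logits = self.net(cond).view(-1, self.n_zones, self.grid, self.grid)
        return logits.softmax(dim=1)          # soft zone map per grid cell

class LandUseGenerator(nn.Module):
    """Stage 2: functional zones -> land-use configuration."""
    def __init__(self, n_zones=8, n_uses=12):
        super().__init__()
        self.net = nn.Conv2d(n_zones, n_uses, kernel_size=3, padding=1)

    def forward(self, zones):
        return self.net(zones).softmax(dim=1)

cond = torch.randn(4, 32)                     # instruction + context embedding
land_use = LandUseGenerator()(ZoneGenerator()(cond))
print(land_use.shape)                         # (4, 12, 16, 16)
```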
Single-view point cloud completion aims to recover the complete geometry of an object based only on a limited observation, which is extremely difficult due to data sparsity and occlusion. The core challenge is to generate plausible geometry that fills in the unobserved parts of the object based on a partial scan, a problem that is under-constrained and has an enormous solution space. Inspired by the classic shadow volume technique in computer graphics, we propose a new method that effectively reduces the solution space. Our method treats the camera as a light source casting rays toward the object. Such rays establish a reasonably constrained yet sufficiently expressive basis for completion. The completion process is then formulated as a point displacement optimization problem: points are initialized at the partial scan and then moved to their target positions through two types of movement per point, a directional movement along the light rays and a constrained local movement for shape refinement. We design a neural network to predict the ideal point movements and obtain the completion results. We demonstrate through exhaustive evaluation and comparison that our method is accurate, robust, and generalizable. Moreover, it outperforms state-of-the-art methods both qualitatively and quantitatively on the MVP dataset.
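The ray-constrained displacement parameterization might look roughly like the following; the variable names, the clamp on local motion, and the toy camera are assumptions for illustration only:

```python
# A small numerical sketch of splitting each point's displacement into an
# along-ray term and a bounded local refinement term (assumed parameterization).
import torch

def displace_points(partial_pts, cam_center, t_along, local_offset, local_cap=0.05):
    """partial_pts: (N,3); t_along: (N,1) signed distance along each camera ray;
    local_offset: (N,3) small refinement clamped to a bounded neighborhood."""
    rays = partial_pts - cam_center                      # ray from camera to point
    rays = rays / rays.norm(dim=1, keepdim=True)         # unit ray directions
    local = local_offset.clamp(-local_cap, local_cap)    # constrained local motion
    return partial_pts + t_along * rays + local

pts = torch.rand(1024, 3)
cam = torch.tensor([0.0, 0.0, -2.0])
completed = displace_points(pts, cam, torch.randn(1024, 1) * 0.1,
                            torch.randn(1024, 3) * 0.01)
print(completed.shape)                                   # torch.Size([1024, 3])
```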
Graph neural networks (GNNs) have drawn increasing attention from materials scientists and have demonstrated a strong ability to establish connections between structures and properties. However, with only unrelaxed structures provided as input, few GNN models can predict the thermodynamic properties of relaxed configurations with an acceptable level of error. In this work, we develop a multi-task (MT) architecture based on DimeNet++ and a mixture density network to improve performance on such tasks. Taking CO adsorption on Cu-based single-atom alloy catalysts as an illustration, we show that our method can reliably estimate CO adsorption energies, with a mean absolute error of 0.087 eV, from the initial CO adsorption structures, without expensive first-principles calculations. Compared with other state-of-the-art GNN methods, our model also exhibits improved generalization when predicting catalytic performance for unseen substrate surfaces or dopant species. We show that the proposed GNN strategy can facilitate catalyst discovery.
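A generic mixture-density-network head of the kind such a multi-task model could pair with its graph encoder is sketched below; the dimensions and component count are assumptions, and the DimeNet++ encoder itself is not reproduced:

```python
# A minimal mixture-density-network head predicting a distribution over
# adsorption energies from a graph-level embedding (generic MDN sketch).
import torch
from torch import nn

class MDNHead(nn.Module):
    def __init__(self, in_dim=128, n_components=5):
        super().__init__()
        self.pi = nn.Linear(in_dim, n_components)     # mixture weights
        self.mu = nn.Linear(in_dim, n_components)     # component means (eV)
        self.log_sigma = nn.Linear(in_dim, n_components)

    def nll(self, h, target):
        """Negative log-likelihood of the adsorption energy under the mixture."""
        log_pi = self.pi(h).log_softmax(dim=-1)
        mu, sigma = self.mu(h), self.log_sigma(h).exp()
        comp = torch.distributions.Normal(mu, sigma)
        log_prob = comp.log_prob(target.unsqueeze(-1)) + log_pi
        return -torch.logsumexp(log_prob, dim=-1).mean()

h = torch.randn(16, 128)          # graph-level embedding from a GNN encoder
energy = torch.randn(16)          # DFT adsorption energies (training targets)
loss = MDNHead().nll(h, energy)
loss.backward()
```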
This paper aims to improve the performance of text-to-SQL parsing by exploring the intrinsic uncertainties in a neural-network-based approach (called SUN). From the data uncertainty perspective, a single SQL query can be learned from multiple semantically equivalent questions. Unlike previous methods that are limited to one-to-one mappings, we propose a data uncertainty constraint to exploit the complementary semantic information underlying multiple semantically equivalent questions (many-to-one) and to learn robust feature representations with reduced spurious associations. In this way, we lower the sensitivity of the learned representations and improve the robustness of the parser. From the model uncertainty perspective, structural information (dependencies) usually exists among the weights of neural networks. To improve the generalizability and stability of neural text-to-SQL parsers, we propose a model uncertainty constraint that refines the query representations by enforcing consistency among the output representations of differently perturbed encoding networks. Extensive experiments on five benchmark datasets show that our method significantly outperforms strong competitors and achieves new state-of-the-art results. For reproducibility, we release our code and data at https://github.com/alibabaresearch/damo-convai/tree/main/main/sunsql.
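The two consistency terms can be illustrated with a toy encoder as below; the mean-pulling form of the data-uncertainty term and the bidirectional-KL form of the model-uncertainty term are assumptions for illustration, not the released SUN code:

```python
# Illustrative sketch: pull representations of semantically equivalent questions
# together, and force two dropout-perturbed passes of the encoder to agree.
import torch
import torch.nn.functional as F
from torch import nn

encoder = nn.Sequential(nn.Linear(128, 256), nn.Dropout(0.1), nn.ReLU(),
                        nn.Linear(256, 64))

def data_uncertainty_loss(equivalent_questions):
    """Representations of paraphrases sharing one SQL are pulled to their mean."""
    reps = encoder(equivalent_questions)            # (K, 64), same target SQL
    center = reps.mean(dim=0, keepdim=True)
    return ((reps - center) ** 2).mean()

def model_uncertainty_loss(question):
    """Two dropout-perturbed passes of the same encoder must stay consistent."""
    p = encoder(question).log_softmax(dim=-1)
    q = encoder(question).log_softmax(dim=-1)       # dropout resampled
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

questions = torch.randn(3, 128)                     # 3 paraphrases of one question
loss = data_uncertainty_loss(questions) + model_uncertainty_loss(questions)
loss.backward()
```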